
    Acceleration and Deceleration in Curvature Induced Phantom Model of the Late and Future Universe, Cosmic Collapse as Well as its Quantum Escape

    Here, cosmology of the late and future universe is obtained from f(R)-gravity with non-linear curvature terms R^2 and R^3 (R being the Ricci scalar curvature). It is different from f(R)-dark energy models, where non-linear curvature terms are taken as a gravitational alternative to dark energy. In the present model, neither linear nor non-linear curvature terms are taken as dark energy. Rather, dark energy terms are induced by curvature terms in the Friedmann equation derived from the f(R)-gravitational equations. It has an advantage over f(R)-dark energy models in that the present model satisfies WMAP results and expands as \sim t^{2/3} during matter dominance, so it does not have the problems for which f(R)-dark energy models are criticized. The curvature-induced dark energy obtained here mimics phantom. Different phases of this model, including acceleration and deceleration during the phantom phase, are investigated here. It is found that expansion of the universe will stop at the age (3.87 t_0 + 694.4 kyr) (t_0 being the present age of the universe), and after this epoch it will contract and collapse by the time (336.87 t_0 + 694.4 kyr). Further, it is shown that the universe will escape the predicted collapse (obtained using classical mechanics) on making quantum-gravity corrections, which become relevant near the collapse time due to extremely high energy density and large curvature analogous to the state of the very early universe. Interestingly, a cosmological constant is also induced here, which is very small in the classical domain but very high in the quantum domain. Comment: 33 pages.
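The f(R) setup the abstract describes can be written schematically as follows; this is a generic sketch, with coupling constants \alpha and \beta as illustrative placeholders rather than the paper's normalization:

```latex
S = \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\,f(R),
\qquad
f(R) = R + \alpha R^2 + \beta R^3 .
```

Varying such an action yields modified field equations whose Friedmann equation carries curvature-induced terms alongside the matter density; it is these induced terms, rather than the R^2 and R^3 terms themselves, that play the role of dark energy in this model.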

    Curvature Inspired Cosmological Scenario

    Using modified gravity with non-linear curvature terms R^2 and R^{(r+2)} (with r being a positive real number and R being the scalar curvature), a cosmological scenario, beginning at the Planck scale, is obtained. Here, a unified picture of cosmology is obtained from f(R)-gravity. In this scenario, the universe begins with power-law inflation, followed by deceleration and acceleration in the late universe as well as possible collapse of the universe in the future. It is different from f(R)-dark energy models, where non-linear curvature terms are assumed to be dark energy. Here, dark energy terms are induced by linear as well as non-linear terms of curvature in the Friedmann equation derived from modified gravity. It is also interesting to see that, in this model, dark radiation and dark matter terms emerge spontaneously from the gravitational sector. It is found that the dark energy obtained here behaves as quintessence in the early universe and as phantom in the late universe. Moreover, analogous to brane tension in the brane-gravity-inspired Friedmann equation, a tension term \lambda arises here, called the cosmic tension. It is found that, in the late universe, the Friedmann equation obtained here contains a term -\rho^2/2\lambda (\rho being the phantom energy density) analogous to a similar term in the Friedmann equation with loop quantum effects if \lambda > 0, and to the brane-gravity correction when \lambda < 0. Comment: 19 pages. To appear in Int. J. Theor. Phys.
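The late-universe Friedmann equation quoted in the abstract, with the cosmic-tension term, has the schematic form below; the overall 8\pi G/3 normalization is the standard convention and an assumption here, only the -\rho^2/2\lambda term is taken from the abstract:

```latex
H^2 = \frac{8\pi G}{3}\left(\rho - \frac{\rho^2}{2\lambda}\right).
```

For \lambda > 0 the quadratic correction carries the sign of the loop-quantum-cosmology correction, while \lambda < 0 reproduces the sign of the brane-gravity correction, as the abstract states.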

    Integrating nano-logic into an undergraduate logic design course

    The goal of this work is to motivate our students and enhance their ability to address newer logic blocks, namely majority gates, within the existing framework. We use a K-map-based methodology to introduce a few novel nano-logic design concepts in the undergraduate logic design class. We want students to possess knowledge of a few fundamental abstracted logical behaviors of future nano-devices and their functionality, which in turn would motivate them to further investigate these non-CMOS emerging devices, logics, and architectures. This would augment students' critical thinking as they apply the learned knowledge to a novel or unfamiliar situation. We intend to augment the existing standard EE and CS courses by inserting K-map-based knowledge modules on nano-logic structures, stimulating interest without significant diversion from the course framework. Experiments with our students show that all of them were able to grasp the basic concept of majority logic synthesis and almost 63% of them had a deeper understanding of the synthesis algorithm demonstrated to them.
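The majority-gate primitive these course modules revolve around can be sketched in a few lines. This is a minimal illustration, not material from the course itself: the three-input majority function in its K-map-reduced sum-of-products form, and the standard specialization that realizes AND and OR by tying one input to a constant.

```python
# Three-input majority gate, the basic logic primitive of QCA nano-logic.
# MAJ(a, b, c) is 1 when at least two inputs are 1; K-map reduction gives
# the sum-of-products form MAJ = ab + bc + ca.
def maj(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (c & a)

# Fixing one input to a constant specializes the majority gate:
# MAJ(a, b, 0) = AND(a, b) and MAJ(a, b, 1) = OR(a, b).
def and_gate(a: int, b: int) -> int:
    return maj(a, b, 0)

def or_gate(a: int, b: int) -> int:
    return maj(a, b, 1)
```

Mapping AND/OR onto fixed-input majority gates is the core exercise when students retarget K-map-reduced Boolean expressions to majority logic.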

    Hierarchical probabilistic macromodeling for QCA circuits

    With the goal of building a hierarchical design methodology for quantum-dot cellular automata (QCA) circuits, we put forward a novel, theoretically sound method for abstracting the behavior of circuit components in QCA circuits, such as majority logic, lines, wire-taps, cross-overs, inverters, and corners, using macromodels. Recognizing that the basic operation of QCA is probabilistic in nature, we propose probabilistic macromodels for standard QCA circuit elements based on conditional probability characterization, defined over the output states given the input states. Any circuit model is constructed by chaining together the individual logic element macromodels, forming a Bayesian network that defines a joint probability distribution over the whole circuit. We demonstrate three uses for these macromodel-based circuits. First, the probabilistic macromodels allow us to model the logical function of QCA circuits at an abstract level - the "circuit" level - above the current practice of layout level, in a time- and space-efficient manner. We show that the circuit-level model is orders of magnitude faster and requires less space than layout-level models, making the design and testing of large QCA circuits efficient and relegating the costly full quantum-mechanical simulation of the temporal dynamics to a later stage in the design process. Second, the probabilistic macromodels abstract crucial device-level characteristics, such as polarization and low-energy error state configurations, at the circuit level. We demonstrate how this macromodel-based circuit-level representation can be used to infer the ground state probabilities, i.e., cell polarizations, a crucial QCA parameter. This allows us to study the thermal behavior of QCA circuits at a higher level of abstraction. Third, we demonstrate the use of these macromodels for error analysis. We show that low-energy state configurations of the macromodel circuit match those of the layout level, thus allowing us to isolate weak points in circuit design at the circuit level itself.
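The idea of a macromodel as a conditional probability table over output states given input states can be sketched as follows. This is an illustrative toy, not the paper's model: a majority-gate macromodel with a single error probability `p_err` (a placeholder parameter), and the marginalization step by which chained macromodels propagate an input distribution to an output distribution.

```python
from itertools import product

def maj(a, b, c):
    # Ideal three-input majority function.
    return (a & b) | (b & c) | (c & a)

def maj_macromodel(p_err):
    """Macromodel of a QCA majority gate as a conditional probability
    table P(out | a, b, c): with probability 1 - p_err the cell block
    settles in the correct ground state, with probability p_err it flips
    to the erroneous state. p_err is an illustrative parameter."""
    cpt = {}
    for a, b, c in product((0, 1), repeat=3):
        out = maj(a, b, c)
        cpt[(a, b, c)] = {out: 1.0 - p_err, 1 - out: p_err}
    return cpt

def output_dist(cpt, input_dist):
    """Chain step of the Bayesian-network view: marginalize over inputs,
    P(out) = sum_in P(out | in) * P(in)."""
    out = {0: 0.0, 1: 0.0}
    for ins, p_in in input_dist.items():
        for o, p_o in cpt[ins].items():
            out[o] += p_in * p_o
    return out
```

For example, feeding the deterministic input (1, 1, 0) into a macromodel with `p_err = 0.1` yields P(out = 1) = 0.9; composing such tables element by element is what builds the joint distribution over a whole circuit.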

    Bayesian macromodeling for circuit level QCA design

    We present a probabilistic methodology to model and abstract the behavior of quantum-dot cellular automata (QCA) circuits at the "circuit level", above the current practice of layout level. These macromodels provide input-output relationships of components (a set of QCA cells emulating a logical function) that are faithful to the underlying quantum effects. We show the macromodeling of a few key circuit components in QCA circuits, such as majority logic, lines, wire-taps, cross-overs, inverters, and corners. In this work, we demonstrate how these macromodels can be used to abstract the logical function of QCA circuits and to extract crucial device-level characteristics, such as polarization and low-energy error state configurations, by a circuit-level Bayesian model that accurately accounts for temperature and other device-level parameters. We also demonstrate how this macromodel-based design can be used effectively in analysing and isolating the weak spots in the design at the circuit level itself.

    Integrating a nanologic knowledge module into an undergraduate logic design course

    This work discusses a knowledge module in an undergraduate logic design course for electrical engineering (EE) and computer science (CS) students that introduces them to nanocomputing concepts. This knowledge module has a twofold objective. First, the module interests students in the fundamental logical behavior and functionality of the nanodevices of the future, which will motivate them to enroll in other elective courses related to nanotechnology offered in most EE and CS departments. Second, this module can be used to let students analyze, synthesize, and apply their existing knowledge of the Karnaugh-map-based Boolean logic reduction scheme in a revolutionary design context with majority logic. Whereas many efforts focus on developing new courses on nanofabrication and even nanocomputing, this work is designed to augment the existing standard EE and CS courses by inserting knowledge modules on nanologic structures so as to stimulate student interest without creating a significant diversion from the course framework.

    Work in progress: introduction of K-map based nano-logic synthesis as knowledge module in logic design course

    This work in progress reports an effort to introduce a knowledge module on novel nano-devices and novel logic primitives in an undergraduate logic design class. Our motivation is to make our students aware of the fundamental abstracted logical behaviors of future nano-devices and their functionality. This effort would also help the students use their existing knowledge of K-map-based logic synthesis to construct logic blocks for novel devices that use majority logic as the basic construct. Moreover, in addition to stimulating our students' interest, we are also augmenting their learning by challenging them to use their existing knowledge to analyze, synthesize, and comprehend novel nano-logic issues through the worksheets and lecture modules. Whereas many efforts focus on developing new courses on nanofabrication and even nano-computing, we intend to augment the existing standard EE and CS courses by inserting knowledge modules on nano-logic structures, stimulating student interest without significant diversion from the course framework.

    Means and method for calibrating a photon detector utilizing electron-photon coincidence

    An arrangement for calibrating a photon detector, particularly applicable to the ultraviolet and vacuum-ultraviolet regions, is based on electron-photon coincidence utilizing crossed electron-beam/atom-beam collisions. Atoms are excited by electrons, which lose a known amount of energy and scatter with a known remaining energy, while the excited atoms emit photons of known radiation. Electrons of the known remaining energy are separated from other electrons and are counted. Photons emitted in a direction related to the particular direction of the scattered electrons are detected to serve as a standard. Each of the electrons is used to initiate the measurement of a time interval, which terminates with the arrival of a photon exciting the photon detector. Only the number of time intervals related to the coincidence correlation, of electrons scattered in the particular direction with the known remaining energy, and of photons of a particular radiation level emitted due to the collisions of such scattered electrons are counted. The detector calibration is related to the number of counted electrons and photons.
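The calibration logic described above reduces to a ratio: each counted scattered electron heralds one emitted photon, so the detector efficiency is the fraction of heralds that arrive in coincidence with a detected photon. A minimal Monte Carlo sketch of that counting, with illustrative numbers and function names (not from the source), looks like this:

```python
import random

def simulate_calibration(n_electrons, true_efficiency, seed=0):
    """Toy model of electron-photon coincidence calibration: every
    counted electron heralds exactly one photon, and the detector fires
    with probability true_efficiency. The estimated efficiency is then
    eta = N_coincidence / N_electron. Numbers are illustrative only."""
    rng = random.Random(seed)
    coincidences = sum(
        1 for _ in range(n_electrons) if rng.random() < true_efficiency
    )
    return coincidences / n_electrons  # estimated detector efficiency
```

With enough heralded events the estimate converges on the true efficiency, which is why the method serves as an absolute calibration: the electron count fixes the number of photons that must have been emitted toward the detector.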